Daniël Schobben's Blind Signal Separation page
Introduction
The goal of blind signal separation is to recover estimates of the source signals from
observed mixtures of them. The case of separating instantaneously mixed signals has
been addressed by many researchers. In acoustical applications, however, the
observations are convolutive mixtures, which require convolutive unmixing.
Recently, a few researchers have found ways to do this, mostly based on
extensions of the approaches for the instantaneous case, implemented in the time
or frequency domain. An approach different from the ones mentioned above is to
perform blind signal separation using only second order statistics (i.e. the
covariance matrix of the observations). Dominic Chan showed how this can be done
for short filters (see below).
The Convolutive Blind Signal Separation algorithm
I recently developed an extension of this algorithm ('CoBliSS') which is implemented in
the frequency domain.
CoBliSS is able to deal with long filters. The next demo shows
its ability to find an unmixing system with good performance.
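For illustration, the convolutive mixing model described in the introduction can be sketched as follows. This is a minimal sketch: the two sources and the 128-tap mixing impulse responses below are random stand-ins, not the measured responses used in the demos.

```python
import numpy as np

rng = np.random.default_rng(0)

# Two independent "source" signals (stand-ins for the music tracks).
N = 1000
s1 = rng.standard_normal(N)
s2 = rng.standard_normal(N)

# Hypothetical 128-tap acoustical impulse responses h_ij: source j -> microphone i.
L = 128
h11, h12 = rng.standard_normal(L) / L, rng.standard_normal(L) / L
h21, h22 = rng.standard_normal(L) / L, rng.standard_normal(L) / L

# Convolutive mixtures: each microphone observes a sum of filtered sources,
# so an instantaneous (scalar) unmixing matrix is no longer sufficient --
# the unmixing system must itself consist of filters.
x1 = np.convolve(s1, h11)[:N] + np.convolve(s2, h12)[:N]
x2 = np.convolve(s1, h21)[:N] + np.convolve(s2, h22)[:N]
```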
BSS demo - processing synthetic sound tracks.
In this demo, two music signals were mixed using real measured acoustical impulse
responses of 128 taps. FIR filters, also of length 128, are used for signal separation.
CoBliSS is used in batch mode in this experiment: first the second order
statistics of the data are estimated; from these, the filters are calculated in the
frequency domain. All audio files are in 16 bit wav format, 22 kHz.
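The batch estimation step can be sketched as follows. This is only an illustration of per-bin second order statistics — a 2x2 cross-power spectral matrix for each frequency bin, averaged over FFT blocks; the function name and block length are my own choices, and the actual CoBliSS computation of the unmixing filters from these matrices is not shown here.

```python
import numpy as np

def cross_power_spectra(x1, x2, nfft=256):
    """Estimate the 2x2 covariance (cross-power spectral) matrix of the
    two observations for each frequency bin, averaged over FFT blocks.
    Illustrative sketch only: a second-order-statistics method would
    then solve for the unmixing filters per bin from these matrices."""
    n_blocks = len(x1) // nfft
    R = np.zeros((nfft, 2, 2), dtype=complex)
    for b in range(n_blocks):
        X1 = np.fft.fft(x1[b * nfft:(b + 1) * nfft])
        X2 = np.fft.fft(x2[b * nfft:(b + 1) * nfft])
        for k in range(nfft):
            X = np.array([X1[k], X2[k]])
            R[k] += np.outer(X, X.conj())   # accumulate X X^H per bin
    return R / n_blocks
```

By construction each per-bin matrix is Hermitian, which is what a second-order method exploits.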
BSS demo - real recorded sound with live speakers.
Experiments were done with audio recorded in a real acoustical
environment. The room used for the recordings measured 3.4 x 3.8 x 5.2 m
(height x width x depth).
Two persons read 4 sentences aloud.
The loudspeaker (used in the next experiment) was silent in this experiment.
The resulting sound was recorded by two microphones which were spaced 58 cm apart.
The recordings are 16 bit, 24 kHz. The separation filters are of length 512 and are
controlled by the CoBliSS algorithm.
Next, this experiment is repeated with French radio news also playing over a
small loudspeaker. The recording and separation setup is the same as above:
two microphones spaced 58 cm apart, 16 bit recordings at 24 kHz, and
separation filters of length 512 controlled by the CoBliSS algorithm.
The far end speech was used as a third (regular) input for CoBliSS.
The adaptive algorithm converges quite fast. To limit the computational
complexity, only one filter update is done per 2560 samples.
In this experiment, the algorithm converges to a good solution within 0.25 seconds.
The great advantage of this approach
is that it gives good "echo cancellation" and is not affected at all by double
talk.
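The block-wise adaptation can be sketched as follows. Here `update_filters` and `apply_filters` are hypothetical placeholders for the CoBliSS filter update and the FIR separation filtering; only the one-update-per-block structure is from the experiment above.

```python
import numpy as np

BLOCK = 2560  # one filter update per 2560 samples, as in the experiment

def process(x, update_filters, apply_filters):
    """Block-wise adaptation: filter every sample, but re-estimate the
    separation filters only once per block to keep computational cost low.
    `update_filters` and `apply_filters` are placeholders standing in for
    the CoBliSS update and the FIR filtering, respectively."""
    out = []
    state = None
    for start in range(0, len(x) - BLOCK + 1, BLOCK):
        block = x[start:start + BLOCK]
        state = update_filters(state, block)     # one update per block
        out.append(apply_filters(state, block))  # filtering runs on every sample
    return np.concatenate(out) if out else np.array([])
```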
Extended CoBliSS
Using the far end signal as an additional input without exploiting the fact that
it is itself a source results in a higher computational complexity than necessary.
Therefore the extended CoBliSS algorithm ('ECoBliSS') was developed, which has a
significantly lower computational complexity.
Processing the same data as in the previous experiment with this extended algorithm
moreover yields results of superior quality:
Details about the CoBliSS and ECoBliSS algorithm can be found
here.
Matlab code for the CoBliSS and ECoBliSS algorithm can be found
here.
Some nice links to other people working on BSS
Last modified: 07-29-1999